
    Paris Call: Important Commitment Against The Arms Race With Cyber Weapons

    The following text by Ninja Marnau was published (in a shortened version) on 14 December 2018 in the Tagesspiegel Background Digitalisierung under the title "Disclosing security vulnerabilities instead of selling them".

    Comments on the “Draft Ethics Guidelines for Trustworthy AI” by the High-Level Expert Group on Artificial Intelligence.

    The European Commission appointed the High-Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of the European strategy on Artificial Intelligence. This includes the elaboration of recommendations on future-related policy development and on ethical, legal, and societal issues related to AI. In January 2019, the Commission asked stakeholders for comments on the AI HLEG's "Draft Ethics Guidelines for Trustworthy AI". CISPA submitted the following comments and remarks in the stakeholders' consultation.

    Die Blockchain im Spannungsfeld der Grundsätze der Datenschutzgrundverordnung

    Blockchain technology and smart contracts built on top of it are currently attracting a great deal of attention. Whether for financial transactions, eHealth, or eGovernment, the use of blockchain technology is being proposed for numerous application areas. Recently, however, critical voices have been growing that consider the cryptographic and consensus principles of this technology to be incompatible with the processing of personal data and thus with the principles of data protection law. The aim of this article is to analyse the suitability of different blockchain technologies for fulfilling the principles for the processing of personal data under Article 5 of the EU General Data Protection Regulation, such as accountability, storage limitation, and data subjects' rights.
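
    One approach often discussed in this context is to keep personal data off-chain and anchor only a salted hash on the ledger, so that deleting the off-chain record effectively satisfies storage limitation and erasure. The following Python snippet is a minimal, purely illustrative sketch of that pattern; all names are invented and it is not taken from the article:

    ```python
    import hashlib
    import json

    # Illustrative only: personal data lives in a mutable off-chain store under the
    # controller's authority, while the append-only "ledger" holds just a salted hash.
    off_chain_store = {}   # mutable storage that can honour erasure requests
    ledger = []            # append-only list standing in for a blockchain

    def anchor_record(record_id, personal_data, salt):
        """Store personal data off-chain and anchor only a commitment on-chain."""
        off_chain_store[record_id] = {"data": personal_data, "salt": salt}
        payload = salt + json.dumps(personal_data, sort_keys=True).encode()
        ledger.append({"record_id": record_id,
                       "commitment": hashlib.sha256(payload).hexdigest()})

    def erase_record(record_id):
        """Honour an erasure request by deleting the off-chain data and its salt."""
        off_chain_store.pop(record_id, None)  # the on-chain hash alone reveals nothing

    anchor_record("r1", {"name": "Alice", "diagnosis": "..."}, salt=b"random-16-bytes")
    erase_record("r1")
    ```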

    PriCL: Creating a Precedent, A Framework for Reasoning about Privacy Case Law

    We introduce PriCL: the first framework for expressing and automatically reasoning about privacy case law by means of precedent. PriCL is parametric in an underlying logic for expressing world properties and provides support for court decisions, their justification, the circumstances in which the justification applies, as well as court hierarchies. Moreover, the framework offers a tight connection between privacy case law and the notion of norms that underlies existing rule-based privacy research. In terms of automation, we identify the major reasoning tasks for privacy cases, such as deducing legal permissions or extracting norms. For solving these tasks, we provide generic algorithms that have particularly efficient realizations within an expressive underlying logic. Finally, we derive a definition of deducibility based on legal concepts and subsequently propose an equivalent characterization in terms of logic satisfiability.
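
    To give a rough intuition for precedent-based reasoning, the toy Python sketch below (which is not the PriCL formalism and uses invented facts and court levels) records the circumstances under which a court permitted or forbade a practice and lets a new situation inherit the decision of the highest applicable precedent:

    ```python
    from dataclasses import dataclass

    # Toy illustration only -- not the PriCL formalism. A precedent records the
    # circumstances (facts) under which a court permitted or forbade a practice.
    @dataclass(frozen=True)
    class Precedent:
        facts: frozenset      # circumstances in which the justification applies
        permitted: bool       # the court's decision on the contested practice
        court_level: int      # higher number = higher court in the hierarchy

    precedents = [
        Precedent(frozenset({"health_data", "explicit_consent"}), True, court_level=1),
        Precedent(frozenset({"health_data"}), False, court_level=2),
    ]

    def deduce(situation):
        """Return the outcome of the highest applicable precedent, or None."""
        applicable = [p for p in precedents if p.facts <= situation]
        if not applicable:
            return None  # no precedent covers the situation
        return max(applicable, key=lambda p: p.court_level).permitted

    print(deduce({"health_data", "explicit_consent"}))  # False: the higher court forbids
    print(deduce({"marketing_data"}))                    # None: no precedent applies
    ```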

    Stellungnahme gegenüber dem Landtag des Saarlandes zum Thema „Stand und Entwicklungsperspektiven der Telemedizin im Saarland“

    Following the invitation to the hearing of the Committee for Social Affairs, Health, Women and Family on the topic "Stand und Entwicklungsperspektiven der Telemedizin im Saarland" (status and development perspectives of telemedicine in Saarland), CISPA researchers submit the following statement. We confine ourselves to CISPA's core expertise, i.e., to questions relating to IT security and data protection.

    Share First, Ask Later (or Never?) - Studying Violations of GDPR's Explicit Consent in Android Apps

    Since the General Data Protection Regulation (GDPR) went into effect in May 2018, online services have been required to obtain users' explicit consent before sharing users' personal data with third parties that use the data for their own purposes. While violations of this legal basis on the Web have been studied in depth, the community lacks insight into such violations in the mobile ecosystem. We perform the first large-scale measurement on mobile apps in the wild to understand the current state of violations of GDPR's explicit consent requirement. Specifically, we build an automated pipeline to detect data sent out to the Internet without prior consent and apply it to a set of 86,163 Android apps. Based on the domains that receive data protected under the GDPR without prior consent, we collaborate with a legal scholar to assess whether these contacted domains are third-party data controllers. Doing so, we find that 24,838 apps send personal data to data controllers without the user's explicit prior consent. To understand the reasons behind this, we run a notification campaign to inform affected developers and gather insights from their responses. We then conduct an in-depth analysis of violating apps, the corresponding third parties' documentation, and privacy policies. Based on the responses and our analysis of the available documentation, we derive concrete recommendations for all involved entities in the ecosystem to allow data subjects to exercise their fundamental rights and freedoms.
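
    As a rough intuition for the core decision such a pipeline has to make, the hypothetical Python sketch below (a simplification, not the authors' tooling) scans a chronological log of app events and flags requests that carry a GDPR-protected identifier before any consent event was observed:

    ```python
    # Hypothetical simplification: flag outgoing requests that contain the
    # advertising ID before the log shows any consent interaction.
    ADVERTISING_ID = "38400000-8cf0-11bd-b23e-10b96e40000d"  # example identifier

    def find_violations(events):
        """events: chronological list of dicts like {"type": "request"/"consent", "url", "body"}."""
        consent_given = False
        violations = []
        for event in events:
            if event["type"] == "consent":
                consent_given = True
            elif event["type"] == "request" and not consent_given:
                if ADVERTISING_ID in event.get("body", "") or ADVERTISING_ID in event["url"]:
                    violations.append(event["url"])  # personal data sent before consent
        return violations

    log = [
        {"type": "request", "url": "https://tracker.example/collect",
         "body": "aaid=" + ADVERTISING_ID},
        {"type": "consent"},
    ]
    print(find_violations(log))  # ['https://tracker.example/collect']
    ```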

    Comparing Large-Scale Privacy and Security Notifications

    Over the last decade, web security research has used notification campaigns as a tool to help web operators fix security problems or stop infrastructure abuse. First attempts at applying this approach to privacy issues focused on single services or vendors. Hence, little is known about whether notifications can also raise awareness and encourage remediation of more complex, vendor-independent violations of privacy legislation at scale, such as informed consent to cookie usage under the EU's ePrivacy Directive or the General Data Protection Regulation's requirement for a privacy policy. It is also unclear how privacy notifications perform and are perceived compared to those about security vulnerabilities. To fill this research gap, we conduct a large-scale, automated email notification study with more than 115K websites, which we notify about the lack of a privacy policy, the use of third-party cookies without or before informed consent, and input forms for personal data that do not use HTTPS. We investigate the impact of warnings about fines and compare the results with security notifications to more than 40K domains about openly accessible Git repositories. Based on our measurements and interactions with operators through email and a survey, we find that notifications about privacy issues are not as well received as security notifications. They result in lower fix rates, less incentive to take immediate action, and more negative feedback. Specific reasons include a lack of awareness and knowledge of the applicability of privacy laws, difficulties in pinpointing the problem, and limited intrinsic motivation.
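
    The kinds of checks behind such privacy notifications can be approximated in a few lines. The sketch below is a minimal illustration, not the study's crawler, and assumes that fetching the landing page is enough to look for a privacy-policy link and for input forms that submit over plain HTTP:

    ```python
    import re
    import requests  # third-party HTTP library, assumed available for this sketch

    def quick_privacy_checks(url):
        """Fetch a landing page and run two simple, heuristic checks."""
        html = requests.get(url, timeout=10).text.lower()
        has_policy_link = bool(re.search(r"privacy|datenschutz", html))
        insecure_forms = re.findall(r'<form[^>]+action="http://[^"]+"', html)
        return {
            "privacy_policy_link_found": has_policy_link,
            "insecure_form_actions": insecure_forms,  # forms posting without HTTPS
        }

    print(quick_privacy_checks("https://example.org"))
    ```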